Models that can predict adverse events ahead of time with a low false-alarm rate are essential for decision support systems to gain acceptance in the medical community. This challenging machine learning task is often still treated as simple binary classification, with few bespoke methods proposed to leverage the temporal dependency between samples. We propose temporal label smoothing (TLS), a novel learning strategy that modulates smoothing strength as a function of proximity to the event of interest. This regularization technique reduces model confidence at the class boundary, where the signal is often noisy or uninformative, so that training can focus on clinically informative data points away from this boundary region. From a theoretical perspective, we also show that our method can be framed as an extension of multi-horizon prediction, a learning heuristic proposed in other early prediction work. TLS empirically matches or outperforms competing methods considered across various early prediction benchmark tasks. In particular, our approach significantly improves performance on clinically relevant metrics such as event recall at low false-alarm rates.
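A minimal sketch of the idea described in the abstract, not the paper's exact formulation: soft targets whose confidence is modulated by time to the event, so that targets are confident near the event, confident-negative far from it, and deliberately uncertain around the label boundary. The names (tls_targets, tls_loss, horizon, temperature) and the sigmoid schedule are illustrative assumptions.

```python
# Hedged sketch: temporally modulated label smoothing for early event prediction.
# Assumes hard labels are 1 within a fixed horizon of the event; the schedule
# below is one plausible choice, not necessarily the paper's parameterisation.
import torch
import torch.nn.functional as F

def tls_targets(time_to_event: torch.Tensor,
                has_event: torch.Tensor,
                horizon: float = 12.0,
                temperature: float = 2.0) -> torch.Tensor:
    """Soft targets as a function of proximity to the event.

    Near the event the target approaches 1, far from it 0, and around the
    class boundary (time_to_event == horizon) it passes smoothly through 0.5,
    so the model is not pushed to be confident where the signal is noisy.
    """
    smooth = torch.sigmoid((horizon - time_to_event) / temperature)
    # Stays with no event at all keep a hard negative target.
    return torch.where(has_event.bool(), smooth, torch.zeros_like(smooth))

def tls_loss(logits: torch.Tensor,
             time_to_event: torch.Tensor,
             has_event: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy against the temporally smoothed targets."""
    targets = tls_targets(time_to_event, has_event)
    return F.binary_cross_entropy_with_logits(logits, targets)
```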
Optimization of directed acyclic graph (DAG) structures has many applications, such as neural architecture search (NAS) and probabilistic graphical model learning. Encoding DAGs into real vectors is a dominant component in most neural-network-based DAG optimization frameworks. Currently, most DAG encoders use an asynchronous message passing scheme which sequentially processes nodes according to the dependencies between nodes in a DAG; that is, a node must not be processed until all its predecessors are processed. As a result, they are inherently not parallelizable. In this work, we propose a Parallelizable Attention-based Computation structure Encoder (PACE) that processes nodes simultaneously and encodes DAGs in parallel. We demonstrate the superiority of PACE through encoder-dependent optimization subroutines that search for the optimal DAG structure based on the learned DAG embeddings. Experiments show that PACE not only improves effectiveness over previous sequential DAG encoders while significantly boosting training and inference speed, but also generates smooth latent (DAG encoding) spaces that are beneficial to downstream optimization subroutines. Our source code is available at \url{https://github.com/zehao-dong/PACE}.
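To illustrate the sequential-versus-parallel contrast the abstract draws, here is a hedged sketch that is not PACE's actual architecture: standard multi-head self-attention masked by the DAG's transitive closure, so every node attends to its ancestors in one parallel pass instead of being visited one by one in dependency order. The helper names (dag_attention_mask, DAGEncoder) and the masking scheme are assumptions for illustration only.

```python
# Sketch only: parallel DAG encoding via ancestor-masked self-attention.
import networkx as nx
import torch
import torch.nn as nn

def dag_attention_mask(g: nx.DiGraph) -> torch.Tensor:
    """Boolean mask where entry (i, j) is True if node i may attend to node j,
    i.e. j is node i itself or one of its ancestors in the DAG."""
    nodes = list(nx.topological_sort(g))
    idx = {n: k for k, n in enumerate(nodes)}
    mask = torch.eye(len(nodes), dtype=torch.bool)
    for n in nodes:
        for a in nx.ancestors(g, n):
            mask[idx[n], idx[a]] = True
    return mask

class DAGEncoder(nn.Module):
    """Encodes all DAG nodes simultaneously with masked self-attention,
    rather than processing them sequentially in topological order."""
    def __init__(self, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, node_feats: torch.Tensor, mask: torch.Tensor):
        # PyTorch attention masks use True to *block* attention, so invert.
        return self.encoder(node_feats, mask=~mask)
```

Because the mask is computed once per graph and attention over all nodes runs in a single batched operation, the per-node sequential dependency of asynchronous message passing disappears, which is the property the abstract highlights.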